
Conversation

@trinity-1686a
Contributor

Description

Part of the work on #5445.
A small survey of our caches:

  • shortlived_cache: no eviction policy (bulk-dropped at the end of a request)
  • partial_request_cache: previously LRU, now LRU or LFU
  • fd_cache_metrics: previously LRU, stays LRU. Given how this cache is used, I don't think anything else makes sense
  • fast_field_cache: previously LRU, now LRU or LFU
  • split_footer_cache: previously LRU, now LRU or LFU
  • searcher_split_cache: previously LRU, still LRU. The cache policy logic is intertwined with how it background-fills itself, so modifying it will require more work.

I chose to use moka because I believe it should be possible to re-express the searcher_split_cache with it, which wasn't the case for most of the other crates I looked into. Its implementation is a TinyLFU backed by an LRU (essentially an LRU with an LFU-based admission policy). This should behave well on changing workloads (LRU) while ignoring one-hit wonders (LFU). A minimal sketch of how such a cache can be wired up is shown below.
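For illustration only, here is a minimal sketch of a byte-weighted moka cache whose eviction policy can be switched between LRU and TinyLFU. The `EvictionKind` enum and `build_cache` helper are hypothetical (not the actual Quickwit code), and the sketch assumes moka 0.12+, where the builder exposes `eviction_policy` with `EvictionPolicy::lru()` and `EvictionPolicy::tiny_lfu()`.

```rust
// Hypothetical sketch (not the actual Quickwit code): a byte-weighted moka
// cache whose eviction policy can be switched between LRU and TinyLFU.
use moka::policy::EvictionPolicy;
use moka::sync::Cache;

/// Illustrative knob mirroring the "LRU or LFU" choice described above.
pub enum EvictionKind {
    Lru,
    TinyLfu,
}

pub fn build_cache(capacity_bytes: u64, kind: EvictionKind) -> Cache<String, Vec<u8>> {
    let policy = match kind {
        EvictionKind::Lru => EvictionPolicy::lru(),
        EvictionKind::TinyLfu => EvictionPolicy::tiny_lfu(),
    };
    Cache::builder()
        .max_capacity(capacity_bytes)
        // Weigh each entry by its payload size so `max_capacity` is in bytes.
        .weigher(|_key: &String, value: &Vec<u8>| {
            value.len().try_into().unwrap_or(u32::MAX)
        })
        .eviction_policy(policy)
        .build()
}
```

With the TinyLFU policy, an entry seen only once is less likely to displace frequently accessed entries, which is the "ignoring one-hit wonders" behavior mentioned above.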

How was this PR tested?

Updated the existing tests and tested manually. Behavior under actual load will likely depend on the workload.

@github-actions

github-actions bot commented Oct 2, 2024

On SSD:

Average search latency is 1.01x that of the reference (lower is better).
Ref run id: 3700, ref commit: 2dcc696

On GCS:

Average search latency is 1.01x that of the reference (lower is better).
Ref run id: 3701, ref commit: 2dcc696
